Nature-inspired algorithms have provided solutions to complex optimization problems that are highly nonlinear. Such problems require careful design of the cost (fitness) function in terms of the parameters to be optimized, after which problems of this kind can be tackled in a general way. In this paper, nature-inspired algorithms play an important role in the optimal design of an antenna array with improved radiation characteristics. A 20-element, linearly spaced array is used as an example of nature-inspired optimization of an antenna array system. The bridge-inspired army ant algorithm (NOABS) is applied to reduce the side lobes and to improve the other radiation characteristics, demonstrating the effect of the optimization on the design. The entire simulation is carried out on the 20-element linear antenna array.
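As a concrete illustration of the cost-function design this abstract refers to, the sketch below computes the array factor of an N-element uniform linear array and scores a candidate excitation by its peak side lobe level. This is a minimal sketch, not the paper's code: the 20-element count matches the paper, while the half-wavelength spacing, angle grid, and main-lobe exclusion width are illustrative assumptions.

```python
import numpy as np

def array_factor_db(weights, d_over_lambda, theta):
    """Array factor (dB) of an N-element uniform linear array.

    weights       : excitation of each element (real or complex)
    d_over_lambda : element spacing in wavelengths
    theta         : observation angles in radians, measured from broadside
    """
    n = np.arange(len(weights))
    psi = 2 * np.pi * d_over_lambda * np.sin(theta)       # inter-element phase
    af = np.abs(weights @ np.exp(1j * np.outer(n, psi)))  # |sum_n w_n e^{j n psi}|
    af /= af.max()                                        # normalize to main beam
    return 20 * np.log10(np.maximum(af, 1e-12))

def sll_cost(weights, d_over_lambda=0.5, n_angles=2001, mainlobe_halfwidth=0.12):
    """Fitness: peak side lobe level in dB; lower is better for the optimizer."""
    theta = np.linspace(-np.pi / 2, np.pi / 2, n_angles)
    pattern = array_factor_db(weights, d_over_lambda, theta)
    sidelobe_region = np.abs(theta) > mainlobe_halfwidth  # exclude the main lobe
    return pattern[sidelobe_region].max()

# Uniform excitation of a 20-element array gives roughly -13 dB peak SLL.
print(sll_cost(np.ones(20)))
```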
The aim of this paper is to introduce AHCOA to the electromagnetics and antenna community. AHCOA is a new nature-inspired metaheuristic algorithm modeled on the hierarchy and division of labor found in ant hill colonization. It has high probabilistic potential for solving not only unconstrained but also constrained optimization problems. In this paper, AHCOA is applied to linear antenna array pattern synthesis as follows: with uniform excitation and equal element spacing relative to the uniform array, AHCOA is used to obtain an array pattern with minimum side lobe levels. The results are compared with those of other state-of-the-art nature-inspired algorithms, such as the ant lion optimizer, and show a considerable improvement for AHCOA.
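The abstract does not give AHCOA's update equations, so the sketch below uses a plain elitist, population-based search as a stand-in to show where any such nature-inspired optimizer plugs into side lobe minimization. It reuses the hypothetical sll_cost fitness function from the sketch above, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def minimize_sll(cost, n_elements=20, pop_size=30, n_iters=200, sigma=0.1):
    """Generic elitist search standing in for the AHCOA update rules.

    Each candidate is a vector of real excitation amplitudes in [0, 1];
    the best fifth of the population is perturbed to form the next generation.
    """
    pop = rng.uniform(0.2, 1.0, size=(pop_size, n_elements))
    for _ in range(n_iters):
        fitness = np.array([cost(w) for w in pop])
        elite = pop[np.argsort(fitness)[: pop_size // 5]]      # keep best 20%
        children = elite[rng.integers(len(elite), size=pop_size)]
        pop = np.clip(children + sigma * rng.normal(size=children.shape), 0.0, 1.0)
    fitness = np.array([cost(w) for w in pop])
    return pop[fitness.argmin()], fitness.min()

best_weights, best_sll = minimize_sll(sll_cost)   # sll_cost from the sketch above
print(f"optimized peak SLL: {best_sll:.1f} dB")   # should fall below -13 dB
```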
This paper aims to introduce the ant hill colonization optimization algorithm (AHCOA) to the electromagnetics and antenna community. Ant hills are built by particular species of ants, notably Formica ants (as well as meadow, fire, and harvester ants). AHCOA is a novel nature-inspired algorithm mimicking how ants build and sustain an ant hill for their survival over many years. The algorithm solves constrained and unconstrained optimization problems and is broadly applicable across diverse fields. AHCOA is formulated by writing equations for the volumetric analysis of the ant hill mound and the manner in which the structure is architected. In this paper, we show how AHCOA improves on the ant lion optimizer of paper [1] for side lobe control in antenna pattern synthesis. The potential of AHCOA for synthesizing and analyzing linear arrays with element spacing d/λ varying over 1.1, 0.6, 0.5, 0.3, and 0.1 is also illustrated. Side lobe level minimization is compared against the ant lion optimizer, showing why AHCOA outperforms the previously simulated ant lion optimizer for side lobe control. The results show that linear arrays are better synthesized by AHCOA than by the other algorithms used in planar arrays. This paper therefore presents AHCOA as a strong candidate for antenna optimization in linear arrays.
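To see why the d/λ values quoted above matter, the sketch below sweeps those spacings for a uniformly excited 20-element array, reusing the hypothetical array_factor_db / sll_cost helpers sketched earlier. Setting the main-lobe exclusion at the first null is an assumption of this illustration, not the paper's procedure.

```python
import numpy as np

# Sweep the element spacings studied in the paper for a uniformly excited
# 20-element array. The main-lobe exclusion is placed at the first null,
# sin(theta) = 1 / (N * d/lambda), so the reported SLL is comparable across spacings.
for d_over_lambda in [1.1, 0.6, 0.5, 0.3, 0.1]:
    first_null = np.arcsin(min(1.0, 1.0 / (20 * d_over_lambda)))
    sll = sll_cost(np.ones(20), d_over_lambda=d_over_lambda,
                   mainlobe_halfwidth=first_null)
    print(f"d/lambda = {d_over_lambda:>3}: peak SLL = {sll:6.1f} dB")
# For d/lambda > 1 a grating lobe as strong as the main beam enters the visible
# region (SLL near 0 dB), which is why spacings at or below half a wavelength
# dominate practical designs.
```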
Automated offensive language detection is essential in combating the spread of hate speech, particularly on social media. This paper describes our work on offensive language identification in the low-resource Indic language Marathi. The problem is formulated as a text classification task that labels a tweet as offensive or non-offensive. We evaluate different monolingual and multilingual BERT models on this classification task, focusing on BERT models pre-trained on social media datasets. We compare the performance of MuRIL, MahaTweetBERT, MahaTweetBERT-Hateful, and MahaBERT on the HASOC 2022 test set. We also explore external data augmentation from the existing Marathi hate speech corpora HASOC 2021 and L3Cube-MahaHate. MahaTweetBERT, a BERT model pre-trained on Marathi tweets, when fine-tuned on the combined dataset (HASOC 2021 + HASOC 2022 + MahaHate), outperforms all other models with an F1 score of 98.43 on the HASOC 2022 test set. With this, we also provide a new state-of-the-art result on the HASOC 2022 / MOLD v2 test set.
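A minimal fine-tuning sketch for the offensive/non-offensive task described above is given below, using the Hugging Face transformers API. The hub id, column names, and hyperparameters are assumptions for illustration; substitute the actual MahaTweetBERT checkpoint and the HASOC/MahaHate fields used in the paper.

```python
# Hedged sketch: "l3cube-pune/marathi-tweets-bert" is an assumed checkpoint id.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

model_name = "l3cube-pune/marathi-tweets-bert"          # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

train = Dataset.from_dict({
    "text": ["...marathi tweet...", "...another tweet..."],  # placeholder rows
    "label": [1, 0],                                         # 1 = offensive
}).map(lambda b: tokenizer(b["text"], truncation=True, padding="max_length",
                           max_length=128), batched=True)

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train).train()
```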
We consider semi-supervised binary classification for applications in which data points are naturally grouped (e.g., survey responses grouped by state) and the labeled data is biased (e.g., survey respondents are not representative of the population). The groups overlap in the feature space and consequently the input-output patterns are related across the groups. To model the inherent structure in such data, we assume the partition-projected class-conditional invariance across groups, defined in terms of the group-agnostic feature space. We demonstrate that under this assumption, the group carries additional information about the class, over the group-agnostic features, with provably improved area under the ROC curve. Further assuming invariance of partition-projected class-conditional distributions across both labeled and unlabeled data, we derive a semi-supervised algorithm that explicitly leverages the structure to learn an optimal, group-aware, probability-calibrated classifier, despite the bias in the labeled data. Experiments on synthetic and real data demonstrate the efficacy of our algorithm over suitable baselines and ablative models, spanning standard supervised and semi-supervised learning approaches, with and without incorporating the group directly as a feature.
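For readers who want the central assumption in symbols, the following is a hedged transcription in notation chosen here, not necessarily the paper's: X are the features, Y the binary label, G the group, φ(X) the group-agnostic features, and {A_k} the partition of their range.

```latex
% Partition-projected class-conditional invariance across groups
% (notation chosen for illustration):
\Pr\bigl(\phi(X) \in A_k \mid Y = y,\, G = g\bigr)
  \;=\; \Pr\bigl(\phi(X) \in A_k \mid Y = y\bigr)
  \qquad \forall\, k,\; y,\; g.
% Under this assumption the group can still be informative about the class,
% since in general
\Pr\bigl(Y = 1 \mid \phi(X),\, G\bigr) \;\neq\; \Pr\bigl(Y = 1 \mid \phi(X)\bigr),
% which is the sense in which the group carries additional information and
% yields the provable improvement in area under the ROC curve.
```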
We study the learning dynamics of self-predictive learning for reinforcement learning, a family of algorithms that learn representations by minimizing the prediction error of their own future latent representations. Despite its recent empirical success, this class of algorithms has an apparent defect: trivial representations (such as constants) minimize the prediction error, yet converging to such solutions is obviously undesirable. Our central insight is that careful design of the optimization dynamics is critical to learning meaningful representations. We identify that a faster-paced optimization of the predictor and semi-gradient updates on the representation are crucial to preventing representation collapse. Then, in an idealized setup, we show that the self-predictive learning dynamics carry out a spectral decomposition of the state-transition matrix, effectively capturing information about the transition dynamics. Building on these theoretical insights, we propose bidirectional self-predictive learning, a novel self-predictive algorithm that learns two representations simultaneously. We examine the robustness of our theoretical insights with a number of small-scale experiments and showcase the promise of the novel representation learning algorithm with large-scale experiments.
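The two design choices highlighted above, a faster-paced predictor and semi-gradient representation updates, can be made concrete in a small linear sketch. The code below is an idealized illustration under assumptions of this write-up, not the paper's experimental setup: a random transition matrix stands in for the environment, and the bootstrap target is held constant during the representation update.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, k = 10, 3                                   # states, representation dim
T = rng.dirichlet(np.ones(n_states), size=n_states)   # random transition matrix
Phi = rng.normal(size=(n_states, k))                  # representation, one row per state
P = rng.normal(size=(k, k))                           # latent predictor

lr_phi, lr_pred, pred_steps = 1e-2, 1e-1, 5           # predictor optimized faster

for _ in range(5000):
    target = T @ Phi                                  # E[phi(s') | s]; treated as constant
    for _ in range(pred_steps):                       # fast timescale: predictor
        P -= lr_pred * Phi.T @ (Phi @ P - target) / n_states
    # slow timescale, semi-gradient: differentiate the loss only through Phi @ P,
    # never through the bootstrap target T @ Phi
    Phi -= lr_phi * (Phi @ P - target) @ P.T / n_states

# Column norms should stay bounded away from zero: no collapse to a constant.
print(np.linalg.norm(Phi, axis=0))
```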
Commonsense knowledge-graphs (CKGs) are important resources towards building machines that can 'reason' on text or environmental inputs and make inferences beyond perception. While current CKGs encode world knowledge for a large number of concepts and have been effectively utilized for incorporating commonsense in neural models, they primarily encode declarative or single-condition inferential knowledge and assume all conceptual beliefs to have the same likelihood. Further, these CKGs utilize a limited set of relations shared across concepts and lack a coherent knowledge organization structure resulting in redundancies as well as sparsity across the larger knowledge graph. Consequently, today's CKGs, while useful for a first level of reasoning, do not adequately capture deeper human-level commonsense inferences which can be more nuanced and influenced by multiple contextual or situational factors. Accordingly, in this work, we study how commonsense knowledge can be better represented by -- (i) utilizing a probabilistic logic representation scheme to model composite inferential knowledge and represent conceptual beliefs with varying likelihoods, and (ii) incorporating a hierarchical conceptual ontology to identify salient concept-relevant relations and organize beliefs at different conceptual levels. Our resulting knowledge representation framework can encode a wider variety of world knowledge and represent beliefs flexibly using grounded concepts as well as free-text phrases. As a result, the framework can be utilized as both a traditional free-text knowledge graph and a grounded logic-based inference system more suitable for neuro-symbolic applications. We describe how we extend the PrimeNet knowledge base with our framework through crowd-sourcing and expert-annotation, and demonstrate its application for more interpretable passage-based semantic parsing and question answering.
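As a hedged illustration of the representation scheme, the sketch below encodes a composite, multi-condition belief with an explicit likelihood and a position in a conceptual hierarchy. All class and field names are invented for illustration; the paper's extension of the PrimeNet knowledge base defines its own schema.

```python
from dataclasses import dataclass

@dataclass
class Condition:
    concept: str            # grounded concept or free-text phrase
    relation: str           # relation drawn from the concept-relevant set
    value: str

@dataclass
class Belief:
    conditions: list        # conjunction of conditions (composite antecedent)
    inference: Condition    # the consequent
    likelihood: float       # beliefs no longer share a single likelihood
    ontology_level: str     # position in the hierarchical conceptual ontology

# "If a glass object falls onto a hard floor, it probably shatters."
belief = Belief(
    conditions=[Condition("object", "madeOf", "glass"),
                Condition("object", "fallsOnto", "hard floor")],
    inference=Condition("object", "stateChange", "shattered"),
    likelihood=0.9,
    ontology_level="physical-object",
)
```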
Self-supervised learning (SSL) methods such as WavLM have shown promising speech separation (SS) results in small-scale simulation-based experiments. In this work, we extend the exploration of SSL-based SS by massively scaling up both the pre-training data (more than 300K hours) and fine-tuning data (10K hours). We also investigate various techniques for efficiently integrating the pre-trained model with the SS network under a limited computation budget, including a low-frame-rate SSL model training setup and a fine-tuning scheme that uses only part of the pre-trained model. Compared with a supervised baseline and a WavLM-based SS model using feature embeddings from the previously released WavLM trained on 94K hours, our proposed model obtains relative word error rate (WER) reductions of 15.9% and 11.2%, respectively, on a simulated far-field speech mixture test set. For conversation transcription on real meeting recordings using continuous speech separation, the proposed model achieves relative WER reductions of 6.8% and 10.6% over the purely supervised baseline on the AMI and ICSI evaluation sets, respectively, while reducing the computational cost by 38%.
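One way to read "integrating the pre-trained model with the SS network" is sketched below: precomputed SSL embeddings are projected and fused with spectral features before a mask-estimating separator. This is a speculative PyTorch illustration under assumed dimensions and layer choices, not the paper's architecture, and it assumes the two feature streams are already time-aligned.

```python
import torch
import torch.nn as nn

class SSLFusedSeparator(nn.Module):
    """Fuse precomputed SSL embeddings (e.g., from a low-frame-rate WavLM)
    with spectral features, then estimate one mask per speaker."""

    def __init__(self, ssl_dim=768, spec_dim=257, n_speakers=2):
        super().__init__()
        self.proj = nn.Linear(ssl_dim, spec_dim)         # project SSL features
        self.separator = nn.LSTM(2 * spec_dim, 256, num_layers=2,
                                 batch_first=True, bidirectional=True)
        self.mask_head = nn.Linear(512, spec_dim * n_speakers)
        self.n_speakers = n_speakers

    def forward(self, spec, ssl_feats):
        # spec: (B, T, spec_dim) magnitude spectrogram; ssl_feats: (B, T, ssl_dim)
        fused = torch.cat([spec, self.proj(ssl_feats)], dim=-1)
        h, _ = self.separator(fused)
        masks = torch.sigmoid(self.mask_head(h))          # (B, T, spec_dim * S)
        masks = masks.view(*spec.shape[:2], self.n_speakers, -1)
        return masks * spec.unsqueeze(2)                  # per-speaker estimates

model = SSLFusedSeparator()
out = model(torch.rand(4, 100, 257), torch.rand(4, 100, 768))
print(out.shape)  # torch.Size([4, 100, 2, 257])
```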
Pre-training large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. Although this method has proven effective for many domains, it might not always provide desirable benefits. In this paper, we study the effects of hateful pre-training on low-resource hate speech classification tasks. While previous studies on the English language have emphasized its importance, we aim to augment their observations with some non-obvious insights. We evaluate different variations of tweet-based BERT models pre-trained on hateful, non-hateful, and mixed subsets of a 40M tweet dataset. This evaluation is carried out for the Indian languages Hindi and Marathi. This paper provides empirical evidence that hateful pre-training is not the best pre-training option for hate speech detection. We show that pre-training on non-hateful text from the target domain provides similar or better results. Further, we introduce HindTweetBERT and MahaTweetBERT, the first publicly available BERT models pre-trained on Hindi and Marathi tweets, respectively. We show that they provide state-of-the-art performance on hate speech classification tasks. We also release the hateful BERT models for the two languages, along with gold hate speech evaluation benchmarks, HateEval-Hi and HateEval-Mr, each consisting of 2,000 manually labeled tweets. The models and data are available at https://github.com/l3cube-pune/MarathiNLP .
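To make the benchmark evaluation concrete, the sketch below scores a fine-tuned classifier on a gold set such as HateEval-Hi or HateEval-Mr using macro F1. The metric choice and the load_hateeval placeholder are assumptions of this illustration; the model and tokenizer follow the fine-tuning sketch given earlier.

```python
import torch
from sklearn.metrics import f1_score

def predict(model, tokenizer, texts, batch_size=32):
    """Batched argmax predictions from a fine-tuned sequence classifier."""
    model.eval()
    preds = []
    with torch.no_grad():
        for i in range(0, len(texts), batch_size):
            batch = tokenizer(texts[i:i + batch_size], truncation=True,
                              padding=True, max_length=128, return_tensors="pt")
            preds.extend(model(**batch).logits.argmax(dim=-1).tolist())
    return preds

# texts, gold = load_hateeval(...)   # placeholder for the released benchmark
# print(f1_score(gold, predict(model, tokenizer, texts), average="macro"))
```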
We present StreamNet, an autoencoder architecture for analyzing the highly heterogeneous geometry of large collections of white matter streamlines. The proposed framework takes advantage of the geometry-preserving properties of the Wasserstein-1 metric to achieve direct encoding and reconstruction of entire bundles of streamlines. We show that the model not only accurately captures the distributive structure of streamlines across a population but also achieves excellent reconstruction performance between real and synthetic streamlines. Model performance is evaluated on white matter streamlines obtained from T1-weighted diffusion imaging of 40 healthy controls, using state-of-the-art bundle comparison metrics.
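The geometry-preserving property referred to above rests on the Wasserstein-1 (earth mover's) distance. The sketch below computes it between two streamlines treated as uniform 3D point clouds, using the POT optimal-transport library; this is an illustrative building block, not StreamNet itself, which applies the metric to entire bundles inside an autoencoder.

```python
import numpy as np
import ot  # pip install pot

def wasserstein1(streamline_a, streamline_b):
    """W1 between two streamlines, each an (n_points, 3) array of 3D points."""
    a = np.full(len(streamline_a), 1.0 / len(streamline_a))     # uniform weights
    b = np.full(len(streamline_b), 1.0 / len(streamline_b))
    M = ot.dist(streamline_a, streamline_b, metric="euclidean")  # ground cost
    return ot.emd2(a, b, M)   # exact optimal-transport cost = Wasserstein-1

rng = np.random.default_rng(0)
fiber = np.cumsum(rng.normal(size=(50, 3)), axis=0)   # synthetic streamline
print(wasserstein1(fiber, fiber + 0.1))               # small shift -> small W1
```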